
    Bosphorus database for 3d face analysis

    A new 3D face database that includes a rich set of expressions, systematic variation of poses, and different types of occlusions is presented in this paper. This database is unique in three aspects: i) the facial expressions comprise a judiciously selected subset of Action Units as well as the six basic emotions, and many actors/actresses are incorporated to obtain more realistic expression data; ii) a rich set of head pose variations is available; and iii) different types of face occlusions are included. Hence, this new database can be a very valuable resource for the development and evaluation of face recognition algorithms under adverse conditions, for facial expression analysis, and for facial expression synthesis.

    The Multiscenario Multienvironment BioSecure Multimodal Database (BMDB)

    A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence is presented. It comprises more than 600 individuals acquired simultaneously in three scenarios: 1) over the Internet, 2) in an office environment with a desktop PC, and 3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Also, signature and fingerprint data have been acquired with both desktop PC and mobile portable hardware. Additionally, hand and iris data were acquired in the second scenario using a desktop PC. Acquisition has been conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors in certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB allow us to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign, including baseline results of individual modalities from the new database, is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008. (Published in the IEEE Transactions on Pattern Analysis and Machine Intelligence journal.)
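
    Baseline results of individual modalities in such campaigns are commonly summarized with operating-point metrics such as the Equal Error Rate (EER), the error rate at the decision threshold where the false accept rate (FAR) and false reject rate (FRR) coincide. Below is a minimal, illustrative Python sketch of EER estimation from genuine and impostor match scores; the synthetic score distributions and the helper name equal_error_rate are assumptions for illustration, not part of the BMDB evaluation protocol.

        import numpy as np

        def equal_error_rate(genuine, impostor):
            # Illustrative helper (not from the BMDB protocol): sweep every
            # observed score as a threshold and return the point where the
            # false accept rate (FAR) and false reject rate (FRR) are closest.
            thresholds = np.sort(np.concatenate([genuine, impostor]))
            far = np.array([(impostor >= t).mean() for t in thresholds])
            frr = np.array([(genuine < t).mean() for t in thresholds])
            i = np.argmin(np.abs(far - frr))
            return (far[i] + frr[i]) / 2.0, thresholds[i]

        # Toy scores: genuine (same-person) comparisons score higher on average.
        rng = np.random.default_rng(0)
        genuine = rng.normal(0.7, 0.1, 500)
        impostor = rng.normal(0.4, 0.1, 5000)
        eer, thr = equal_error_rate(genuine, impostor)
        print(f"EER ~ {eer:.3f} at threshold {thr:.3f}")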

    Face Pose Alignment with Event Cameras

    The event camera (EC) is emerging as a bio-inspired sensor that can serve as an alternative or complementary vision modality, with the benefits of energy efficiency, high dynamic range, and high temporal resolution coupled with activity-dependent sparse sensing. In this study we investigate the problem of face pose alignment with ECs, an essential pre-processing stage for facial processing pipelines. EC-based alignment can unlock all these benefits in facial applications, especially where motion and dynamics carry the most relevant information, since ECs sense temporal change. We specifically aim at efficient processing by developing a coarse alignment method that handles large pose variations in facial applications. For this purpose, we have prepared a dataset of extreme head rotations with varying motion intensity, labeled by multiple human annotators. We propose a motion-detection-based alignment approach that generates activity-dependent pose-events and prevents unnecessary computation in the absence of pose change. The alignment is realized by cascaded regression of extremely randomized trees. Since EC sensors perform temporal differentiation, we characterize the alignment performance across different head movement speeds, face localization uncertainty ranges, face resolutions, and predictor complexities. Our method obtained a 2.7% alignment failure rate on average, whereas annotator disagreement was 1%. The promising coarse alignment performance on EC sensor data, together with a comprehensive analysis, demonstrates the potential of ECs in facial applications.
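
    A cascade of extremely randomized trees for coarse alignment can be sketched as a sequence of regression stages, each predicting a residual pose update from features extracted around the current estimate. The sketch below is a minimal illustration assuming scikit-learn's ExtraTreesRegressor, a three-angle pose parameterization, and a toy feature extractor; the paper's actual features, cascade depth, and event representation are not reproduced here.

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor

        class CascadedPoseRegressor:
            # Each stage regresses an additive pose update; later stages
            # correct the residual error left by earlier ones.
            def __init__(self, n_stages=3, n_trees=100):
                self.stages = [ExtraTreesRegressor(n_estimators=n_trees,
                                                   random_state=s)
                               for s in range(n_stages)]

            def fit(self, feats, frames, target_poses, init_poses):
                poses = init_poses.copy()
                for stage in self.stages:
                    X = np.stack([feats(f, p) for f, p in zip(frames, poses)])
                    stage.fit(X, target_poses - poses)  # learn the residual
                    poses = poses + stage.predict(X)    # refine for next stage
                return self

            def predict(self, feats, frames, init_poses):
                poses = init_poses.copy()
                for stage in self.stages:
                    X = np.stack([feats(f, p) for f, p in zip(frames, poses)])
                    poses = poses + stage.predict(X)
                return poses

        # Toy demo with synthetic "event frames" correlated with the true pose.
        def toy_features(frame, pose):
            return np.concatenate([frame, pose])

        rng = np.random.default_rng(1)
        true_poses = rng.uniform(-60, 60, size=(200, 3))  # yaw, pitch, roll (deg)
        frames = true_poses @ rng.normal(size=(3, 8)) + rng.normal(0, 0.1, (200, 8))
        init = np.zeros((200, 3))
        model = CascadedPoseRegressor().fit(toy_features, frames, true_poses, init)
        pred = model.predict(toy_features, frames, init)
        print("mean abs pose error (deg):", np.abs(pred - true_poses).mean())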

    Facial Action Unit Detection: 3D versus 2D Modality

    In human facial behavior analysis, Action Unit (AU) coding is a powerful instrument for coping with the diversity of facial expressions. Almost all of the work in the literature on facial action recognition is based on 2D camera images. Given the performance limitations of AU detection with 2D data, 3D facial surface information appears to be a viable alternative. 3D systems capture true facial surface data and are less disturbed by illumination and head pose. In this paper we extensively compare the 3D modality with the 2D imaging modality for AU recognition. Surface data is converted into curvature data and mapped into 2D so that both modalities can be compared on equal footing. Since the approach is entirely data-driven, possible bias due to the design is avoided. Our experiments cover 25 AUs and are based on the comparison of Receiver Operating Characteristic (ROC) curves. We demonstrate that 3D data generally performs better, especially for lower-face AUs, and that it is more robust in detecting low-intensity AUs. Also, we show that generative and discriminative classifiers perform on a par on 3D data. Finally, we evaluate fusion of the two modalities. The highest detection rate was achieved by fusion, at 97.1% area under the ROC curve; this score was 95.4% for the 3D and 93.5% for the 2D modality.
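
    The ROC-based comparison and the modality fusion can be illustrated with a short sketch. The abstract does not specify the fusion rule, so simple score averaging is assumed below, and the detector scores are synthetic; only the evaluation metric (area under the ROC curve, reported as a percentage) follows the paper.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)

        # Synthetic per-sample detector scores for one AU: positives tend to
        # score higher; the 3D modality is assumed slightly less noisy here.
        y = rng.integers(0, 2, 1000)                # 1 = AU present
        score_3d = y + rng.normal(0.0, 0.65, 1000)
        score_2d = y + rng.normal(0.0, 0.85, 1000)

        # Score-level fusion by averaging (an assumption; the paper's exact
        # fusion rule is not given in the abstract).
        score_fused = (score_2d + score_3d) / 2.0

        for name, s in [("2D", score_2d), ("3D", score_3d), ("fused", score_fused)]:
            print(f"{name:5s} AUC = {100 * roc_auc_score(y, s):.1f}")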